Vision Transformers (ViTs), a specialized type of transformer, are applied to various computer vision (CV) tasks such as image recognition, and can address several shortcomings of convolutional neural networks (CNNs). Different ViT variants are also used for image coding tasks such as compression, super-resolution, segmentation, and denoising. The purpose of this survey is to present the applications of ViTs in CV; to the best of our knowledge, it is the first survey of its kind. We first categorize the CV applications where ViTs are applicable, including image classification, object detection, image segmentation, image compression, image super-resolution, image denoising, and anomaly detection. We then review the state of the art in each category and list the available models, followed by a detailed analysis and comparison of each model with its pros and cons. After that, we present our insights and lessons learned for each category. Moreover, we discuss several open research challenges and future research directions.
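Common to all the ViT variants surveyed is the tokenization step: the image is split into fixed-size patches that are flattened and linearly projected into embedding vectors. A minimal NumPy sketch of that step (patch size, embedding width, and the random projection are illustrative stand-ins for learned weights):

```python
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W, C) image into non-overlapping flattened patches."""
    H, W, C = image.shape
    p = patch_size
    patches = image.reshape(H // p, p, W // p, p, C)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * C)
    return patches  # (num_patches, patch_dim)

rng = np.random.default_rng(0)
image = rng.standard_normal((224, 224, 3))
patches = patchify(image, 16)                             # 14 * 14 = 196 patches
W_embed = rng.standard_normal((16 * 16 * 3, 768)) * 0.02  # learned in a real ViT
tokens = patches @ W_embed                                # (196, 768) token embeddings
```

In an actual ViT, a class token and positional embeddings are prepended/added to `tokens` before the transformer encoder; those parts are omitted here.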
Dynamical systems are found in innumerable forms across the physical and biological sciences, yet all these systems fall naturally into universal equivalence classes: conservative or dissipative, stable or unstable, compressible or incompressible. Predicting these classes from data remains an essential open challenge in computational physics, one at which existing time-series classification methods struggle. Here, we propose \texttt{phase2vec}, an embedding method that learns high-quality, physically meaningful representations of 2D dynamical systems without supervision. Our embeddings are produced by a convolutional backbone that extracts geometric features from flow data and minimizes a physically informed vector field reconstruction loss. In an auxiliary training period, embeddings are optimized so that they robustly encode the equations of unseen data over and above the performance of a per-equation fitting method. The trained architecture can not only predict the equations of unseen data but also, crucially, learns embeddings that respect the underlying semantics of the embedded physical systems. We validate the quality of the learned embeddings by investigating the extent to which physical categories of input data can be decoded from embeddings, compared to standard blackbox classifiers and state-of-the-art time series classification techniques. We find that our embeddings encode important physical properties of the underlying data, including the stability of fixed points, conservation of energy, and the incompressibility of flows, with greater fidelity than competing methods. Finally, we apply our embeddings to the analysis of meteorological data, showing that we can detect climatically meaningful features. Collectively, our results demonstrate the viability of embedding approaches for the discovery of dynamical features in physical systems.
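As a toy illustration of the kind of input involved (not the authors' code), the sketch below samples two 2D systems on a grid as vector-field "images", defines a reconstruction-style MSE loss, and shows that mean divergence already separates a dissipative spiral from a conservative center:

```python
import numpy as np

def vector_field_on_grid(f, n=64, lim=2.0):
    """Evaluate a 2D dynamical system (dx/dt, dy/dt) = f(x, y) on a regular grid."""
    xs = np.linspace(-lim, lim, n)
    X, Y = np.meshgrid(xs, xs)
    U, V = f(X, Y)
    return np.stack([U, V], axis=-1)  # (n, n, 2) flow "image"

# A dissipative linear system (stable spiral): dx/dt = -x - y, dy/dt = x - y
spiral = vector_field_on_grid(lambda x, y: (-x - y, x - y))
# A conservative system (center): dx/dt = -y, dy/dt = x
center = vector_field_on_grid(lambda x, y: (-y, x))

def reconstruction_loss(pred, true):
    """Mean squared error over the sampled vector field."""
    return float(np.mean((pred - true) ** 2))

def mean_divergence(field, lim=2.0):
    """Average divergence: negative for dissipative flow, ~0 for a conservative center."""
    n = field.shape[0]
    h = 2 * lim / (n - 1)
    dU_dx = np.gradient(field[..., 0], h, axis=1)  # x varies along columns
    dV_dy = np.gradient(field[..., 1], h, axis=0)  # y varies along rows
    return float(np.mean(dU_dx + dV_dy))
```

The spiral's divergence is exactly -2 and the center's is 0, mirroring the dissipative/conservative distinction the embeddings are shown to encode.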
Ear recognition systems have been widely studied, yet only a few ear presentation attack detection (PAD) methods exist, and consequently there is no publicly available ear PAD database. In this paper, we propose a PAD method using a pre-trained deep neural network and release a new dataset, named the Warsaw University of Technology Ear Dataset for Presentation Attack Detection (WUT-Ear V1.0). No ear database captured with mobile devices has been available; we therefore captured more than 8500 genuine ear images from 134 subjects and more than 8500 fake ear images. We carried out replay attacks and photo print attacks with three different mobile devices. On the replay-attack database, our method achieves a half total error rate (HTER) and an attack presentation classification error rate (APCER) of 0.08%. The captured data are statistically analyzed and visualized to gauge their significance and make them a benchmark for further research. In summary, we have presented a secure PAD method for ear recognition systems, along with publicly available ear images and an ear PAD dataset. The code and evaluation results are publicly available at https://github.com/jalilnkh/kartalolool-ear-pad.
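The reported numbers follow the standard presentation-attack-detection metrics (APCER, BPCER, and their average, HTER); a minimal sketch of how they are computed from per-presentation scores (the threshold and scores below are illustrative):

```python
def pad_error_rates(attack_scores, bona_fide_scores, threshold):
    """Standard PAD metrics.

    Scores above `threshold` are classified as bona fide (genuine).
    APCER: fraction of attack presentations wrongly accepted as bona fide.
    BPCER: fraction of bona fide presentations wrongly rejected as attacks.
    HTER:  average of the two error rates.
    """
    apcer = sum(s > threshold for s in attack_scores) / len(attack_scores)
    bpcer = sum(s <= threshold for s in bona_fide_scores) / len(bona_fide_scores)
    hter = (apcer + bpcer) / 2
    return apcer, bpcer, hter

# Example: 4 attack and 4 bona fide scores, decision threshold 0.5
apcer, bpcer, hter = pad_error_rates([0.1, 0.2, 0.9, 0.3], [0.8, 0.7, 0.4, 0.9], 0.5)
```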
Iris segmentation and localization in unconstrained environments are challenging due to long distances, illumination variations, limited user cooperation, and moving subjects. To address this problem, we present a U-Net approach with a pre-trained MobileNetV2 deep neural network. We use MobileNetV2 weights pre-trained on the ImageNet dataset and fine-tune them on the iris recognition and localization domain. In addition, we introduce a new dataset, named KartalOl, to better evaluate detectors in iris recognition scenarios. To provide domain adaptation, we fine-tune the MobileNetV2 model on CASIA-Iris-Asia, CASIA-Iris-M1, CASIA-Iris-Africa, and our own dataset. We also augment the data by applying left-right flipping, rotation, zooming, and brightness adjustment. We select the binarization threshold for the binary masks by iterating over the images in the provided dataset. The network is trained on the KartalOl dataset together with CASIA-Iris-Asia, CASIA-Iris-M1, and CASIA-Iris-Africa. Experimental results highlight that our method surpasses state-of-the-art methods on mobile-based benchmarks. The code and evaluation results are publicly available at https://github.com/jalilnkh/kartalol-nir-isl2021031301.
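The binarization-threshold selection described above can be sketched as a simple sweep that maximizes mean mask IoU over a dataset; this is an illustrative reconstruction on toy data, not the released code:

```python
import numpy as np

def iou(pred, ref):
    """Intersection over union of two boolean masks."""
    union = np.logical_or(pred, ref).sum()
    return np.logical_and(pred, ref).sum() / union if union else 1.0

def select_threshold(prob_maps, ref_masks, candidates=np.linspace(0.1, 0.9, 17)):
    """Pick the binarization threshold with the best mean IoU over the dataset."""
    def mean_iou(t):
        return np.mean([iou(p >= t, m) for p, m in zip(prob_maps, ref_masks)])
    return max(candidates, key=mean_iou)

# Toy data: one probability map whose ideal cut lies between 0.4 and 0.6
prob = np.array([[0.9, 0.6, 0.4, 0.1]])
mask = np.array([[1, 1, 0, 0]], dtype=bool)
best_t = select_threshold([prob], [mask])
```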
Purpose of the study: In biometrics, visible human characteristics are popular and viable for verification and identification on mobile devices. However, imposters can spoof these characteristics by creating fake and artificial biometrics to fool the system, so visible biometric systems carry a high security risk of presentation attacks. Methods: Challenge-based methods, in particular gaze tracking and pupil dynamics, appear to be more secure than other biometric approaches. We review the existing work that explores gaze tracking and pupil dynamics for liveness detection. Main results: This study analyzes various aspects of gaze tracking and pupil dynamics presentation attacks, including state-of-the-art liveness detection algorithms, the various kinds of artefacts, the accessibility of public databases, and a summary of standardization in this area. In addition, we discuss future work and open challenges in creating secure liveness detection for challenge-based systems.
Assigning consistent temporal identifiers to multiple moving objects in a video sequence is a challenging problem, and a solution to it would have immediate ramifications for multiple object tracking and segmentation. We propose a strategy that treats the temporal identification task as a spatio-temporal clustering problem. Specifically, we propose an unsupervised learning approach using a convolutional and fully connected autoencoder, which we call a deep heterogeneous autoencoder, to learn discriminative features from segmentation masks and detection bounding boxes. We extract the masks and their corresponding bounding boxes from a pre-trained instance segmentation network and train the autoencoder with task-dependent uncertainty weights to generate common latent features. We then construct a constraint graph that encourages associations among objects satisfying a set of known temporal conditions. The feature vectors and the constraint graph are then provided to a k-means clustering algorithm to separate the corresponding data points in the latent space. We evaluate the performance of our method on challenging synthetic and real-world multi-object video datasets. Our results show that our technique outperforms several state-of-the-art methods.
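The final step, clustering latent feature vectors into object identities, can be illustrated with a minimal Lloyd's k-means on synthetic stand-ins for the autoencoder embeddings (not the authors' implementation, and without the constraint graph):

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal Lloyd's k-means: returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center, then recompute the centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated synthetic "identities" in a 2D latent space
rng = np.random.default_rng(1)
a = rng.normal([0, 0], 0.1, size=(20, 2))
b = rng.normal([5, 5], 0.1, size=(20, 2))
labels = kmeans(np.vstack([a, b]), k=2)
```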
Unsupervised learning-based anomaly detection in latent space has gained importance, since discriminating anomalies from normal data becomes difficult in high-dimensional space. Both density estimation and distance-based methods for detecting anomalies in latent space have been explored in the past. These methods show that retaining valuable properties of the input data in the latent space helps in the better reconstruction of test data. Moreover, real-world sensor data is skewed and non-Gaussian in nature, making mean-based estimators unreliable. Furthermore, anomaly detection methods based on reconstruction error rely on Euclidean distance, which neither considers useful correlation information in the feature space nor accurately reconstructs data that deviates from the training distribution. In this work, we address the limitations of reconstruction-error-based autoencoders and propose a kernelized autoencoder that leverages a robust form of the Mahalanobis distance (MD) to measure latent-dimension correlation and effectively detect both near and far anomalies. This hybrid loss is aided by the principle of maximizing the mutual information between the latent space and the high-dimensional prior data space: the entropy of the latent space is maximized while useful correlation information of the original data is preserved in the low-dimensional latent space. The multi-objective function thus has two goals: it measures correlation information in the latent feature space in the form of the robust MD, and it simultaneously preserves useful correlation information from the original data space by maximizing the mutual information between the prior and latent spaces.
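A minimal sketch of Mahalanobis-distance scoring in a latent space shows why it can detect anomalies that Euclidean distance misses (a plain empirical covariance is used here instead of the paper's robust estimator, and all data are synthetic):

```python
import numpy as np

def fit_mahalanobis(Z, eps=1e-6):
    """Fit mean and inverse covariance on normal-data latent codes Z of shape (n, d)."""
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False) + eps * np.eye(Z.shape[1])
    cov_inv = np.linalg.inv(cov)
    def score(z):
        diff = z - mu
        return float(np.sqrt(diff @ cov_inv @ diff))
    return score

# Strongly correlated "normal" latent codes; anomalies break the correlation structure
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
Z = np.stack([x, x + 0.1 * rng.standard_normal(500)], axis=1)
score = fit_mahalanobis(Z)
```

The point [2, 2] lies on the correlation axis and scores low despite its large Euclidean norm, while [1, -1] violates the correlation and scores far higher.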
The Internet of Things (IoT) is a system that connects physical computing devices, sensors, software, and other technologies. Data can be collected, transferred, and exchanged with other devices over the network without requiring human interaction. One challenge the development of IoT faces is the existence of anomalous data in the network. Research on anomaly detection in the IoT environment has therefore become popular and necessary in recent years. This survey provides an overview of the current progress of different anomaly detection algorithms and how they can be applied in the context of the Internet of Things. We categorize the widely used anomaly detection machine learning and deep learning techniques for IoT into three types: clustering-based, classification-based, and deep learning-based. For each category, we introduce some state-of-the-art anomaly detection methods and evaluate the advantages and limitations of each technique.
Pneumonia, a respiratory infection caused by bacteria or viruses, affects a large number of people, especially in developing and impoverished countries where high levels of pollution, unclean living conditions, and overcrowding are frequently observed alongside insufficient medical infrastructure. Pneumonia can cause pleural effusion, a condition in which fluid fills the lung and complicates breathing. Early detection of pneumonia is essential for ensuring curative care and boosting survival rates, and rapid identification models are preferred so that appropriate treatment can begin as soon as possible. The approach most commonly used to diagnose pneumonia is chest X-ray imaging. The purpose of this work is to develop a method for the automatic diagnosis of bacterial and viral pneumonia in digital X-ray images. This article first presents the authors' technique and then gives a comprehensive report on recent developments in the reliable diagnosis of pneumonia. In this study, we fine-tuned state-of-the-art deep convolutional neural networks to classify the images and tested their performance, comparing the deep learning architectures empirically: VGG19, ResNet152V2, ResNeXt101, SEResNet152, MobileNetV2, and DenseNet201. The experimental data consist of two groups: sick and healthy X-ray images. DenseNet201 showed no overfitting or performance degradation in our experiments, and its accuracy tends to increase with the number of epochs. Furthermore, DenseNet201 achieves state-of-the-art performance with a significantly smaller number of parameters and within a reasonable computing time, outperforming the other architectures with a testing accuracy of 95%. Each architecture was trained using Keras with Theano as the backend.
Diabetic Retinopathy (DR) is a leading cause of vision loss worldwide, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System, which classifies DR grading, localizes lesion areas, and provides visual explanations; and (ii) DRG-Expert-Interaction, which receives feedback from expert users and improves the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations using Wasserstein distance and adversarial-learning-based entropy minimization. Besides, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion features and classification features, our approach is robust to a certain level of noise in user feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRiD and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly supervised manner.
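For intuition, the Wasserstein distance used for the feature alignment has a simple closed form in one dimension: the mean absolute difference between sorted samples. A toy sketch (not the DRG-Net implementation, and restricted to equal-size 1D samples):

```python
import numpy as np

def wasserstein_1d(x, y):
    """W1 distance between two equal-size 1D empirical samples:
    the mean absolute difference of their sorted values."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

# Shifting a distribution by a constant c moves it a Wasserstein distance of c
d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])
```

Unlike KL divergence, this distance stays finite and informative even when the two sample sets do not overlap, which is why it is popular for aligning feature distributions across domains.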